
    Points and Stripes: A Novel Technique for Masking Biological Motion Point-Light Stimuli

    Human articulated motion can be recognized readily and robustly even from impoverished, so-called point-light displays. Such sequence information is processed by separate visual channels recruiting different stages at low and intermediate levels of the cortical visual processing hierarchy. The respective contributions of motion and form information to the perception of articulated, or biological, motion are still under investigation. Here we investigate experimentally whether and how specific spatio-temporal features, such as extrema in the motion energy or maximum limb expansion, indicated by the lateral and longitudinal extension, constrain the formation of representations of articulated body motion. In order to isolate the relevant stimulus properties, we suggest a novel masking technique which allows us to selectively impair the ankle information of the body configuration while keeping the motion of the point-light locations intact. Our results provide evidence that maxima in feature channel representations, e.g., the lateral or longitudinal extension, define elemental features that specify key poses of biological motion patterns. These findings support models which aim at automatically building visual representations for the cortical processing of articulated motion by identifying temporally localized events in a continuous input stream.
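    The lateral and longitudinal extension features mentioned in the abstract can be computed directly from point-light coordinates. The following is a minimal sketch (not the authors' implementation) of how local maxima in these extension signals could mark key-pose candidates, assuming the data are given as a (T, N, 2) array of 2D point positions:

```python
import numpy as np

def keypose_candidates(frames):
    """Find frames where the lateral (x) or longitudinal (y) extension
    of a point-light figure reaches a local maximum.

    frames: array of shape (T, N, 2) -- T time steps, N point lights, (x, y).
    Returns sorted indices of candidate key-pose frames.
    """
    # Extension per frame: spread of the point lights along each axis.
    lateral = frames[..., 0].max(axis=1) - frames[..., 0].min(axis=1)
    longitudinal = frames[..., 1].max(axis=1) - frames[..., 1].min(axis=1)

    def local_maxima(signal):
        # Interior time steps strictly larger than both neighbours.
        return [t for t in range(1, len(signal) - 1)
                if signal[t] > signal[t - 1] and signal[t] > signal[t + 1]]

    return sorted(set(local_maxima(lateral)) | set(local_maxima(longitudinal)))
```

    For a walking figure, the lateral extension oscillates with the gait cycle, so the maxima picked out here correspond to the most extended limb configurations.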

    Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    The categorization of real-world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjoint sets, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In recent decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorical and subcategorical visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.
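    The mechanism described above — bottom-up and top-down weights learned jointly, with a residual between expected and actual input triggering the recruitment of new category nodes — can be sketched in a few lines. This is a toy illustration under assumed parameters (`lr`, `threshold`), not the paper's network:

```python
import numpy as np

def learn_categories(inputs, lr=0.5, threshold=0.4):
    """Toy associative category learner.

    Each category node has bottom-up (instar) weights matched against the
    input and top-down (outstar) weights encoding the expected input.
    When the mismatch (residual) between the best category's top-down
    expectation and the current input exceeds `threshold`, a new
    (sub)category node is recruited; otherwise the match is refined.
    """
    instar, outstar = [], []
    for x in inputs:
        x = x / np.linalg.norm(x)
        if instar:
            responses = [w @ x for w in instar]
            best = int(np.argmax(responses))
            residual = np.linalg.norm(x - outstar[best])
        if not instar or residual > threshold:
            # Recruit new representational resources for this input.
            instar.append(x.copy())
            outstar.append(x.copy())
        else:
            # Refine the matched category: move both weight sets toward x.
            instar[best] += lr * (x - instar[best])
            outstar[best] += lr * (x - outstar[best])
    return instar, outstar
```

    Presenting two clearly distinct inputs yields two category nodes, while a repeated input merely refines its existing node — the residual-controlled recruitment the abstract describes.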

    Embodied learning of a generative neural model for biological motion perception and inference

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full-body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of its own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large variations in body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency with the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.
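    The idea of rotating the visual frame of reference until the observation becomes consistent with the embodied self-encoding can be illustrated with a simple grid search over candidate view angles. This is a minimal 2D sketch of that inference step (the function name and the exhaustive search are illustrative assumptions; the model in the paper performs gradient-based active inference):

```python
import numpy as np

def infer_view_angle(observed, self_model,
                     angles=np.linspace(0, 2 * np.pi, 360, endpoint=False)):
    """Rotate the visual frame of reference and keep the angle that
    minimizes the mismatch between the observed posture and the model's
    own egocentric posture encoding.

    observed, self_model: arrays of shape (N, 2) of 2D joint positions.
    Returns the best-fitting rotation angle in radians.
    """
    best_angle, best_err = None, np.inf
    for a in angles:
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        # Apply the candidate rotation to every observed joint (row vectors).
        err = np.linalg.norm(observed @ rot.T - self_model)
        if err < best_err:
            best_angle, best_err = a, err
    return best_angle
```

    Minimizing prediction error over a transformation parameter, rather than over the percept itself, is the essence of the active-inference reading given in the abstract.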

    Forgivingness of an Anteromedially Positioned Small Locked Plate for High Tibial Osteotomy in Case of Overcorrection and Lateral Hinge Fracture

    High tibial osteotomy (HTO) represents a sensible treatment option for patients with moderate unicondylar osteoarthritis of the knee and extraarticular malalignment. The possibility of a continuously variable correction setting and a surgical approach low in complications has meant that the medial opening osteotomy has prevailed over the past decades. The objective of the present study was to determine whether anteromedially positioned small plates are nevertheless forgiving under biomechanically unfavourable conditions (overcorrection and lateral hinge fracture). In this study, a simulated HTO was performed on composite tibiae with a 10-mm wedge and fixed-angle anteromedial osteosynthesis with a small implant. Force was applied axially in a neutral mechanical axis, and in a slight and a marked overcorrection into valgus, with and without a lateral hinge fracture in each case. At the same time, a physiological gait with a dual-peak force profile and a peak load of 2.4 kN was simulated. Interfragmentary motion and rigidity were determined. The rigidity of the osteosynthesis increased over the cycles investigated. A slight overcorrection into valgus led to the lowest interfragmentary motion, compared with pronounced valgisation and neutral alignment. A lateral hinge fracture led to a significant decrease in rigidity and increase in interfragmentary motion. However, in no case was the limit of 1 mm interfragmentary motion critical for osteotomy healing exceeded. The degree of correction of the leg axis, and the presence of a lateral hinge fracture, have an influence on rigidity and interfragmentary motion. From a mechanically neutral axis ranging up to pronounced overcorrection, the implant investigated offers sufficient stability to allow healing of the osteotomy, even if a lateral hinge fracture is present.

    Utilizing the Capabilities Offered by Eye-Tracking to Foster Novices' Comprehension of Business Process Models

    Business process models constitute fundamental artifacts for enterprise architectures as well as for the engineering of processes and information systems. However, less experienced stakeholders (i.e., novices) face a wide range of issues when trying to read and comprehend these models. In particular, process model comprehension not only requires knowledge of process modeling notations, but also skills to visually and correctly interpret the models. In this context, many unresolved issues concerning the factors hindering process model comprehension exist and, hence, the identification of these factors becomes crucial. Using eye-tracking as an instrument, this paper presents the results of a study in which we analyzed the eye movements of novices and experts while comprehending process models expressed in terms of the Business Process Model and Notation (BPMN) 2.0. Further, recorded eye movements are visualized as scan paths to analyze the applied comprehension strategies. We learned that experts comprehend process models more effectively than novices. In addition, we observed particular eye-movement patterns (e.g., back-and-forth saccade jumps) as well as different strategies of novices and experts in comprehending process models.
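    The back-and-forth saccade jumps mentioned above can be quantified once fixations have been mapped to areas of interest (model elements). The following is a minimal sketch of such an analysis (the `backtrack_rate` helper and the AOI labels are illustrative, not from the study):

```python
from collections import Counter

def backtrack_rate(fixation_aois):
    """Quantify back-and-forth saccade jumps in a scan path.

    fixation_aois: sequence of area-of-interest labels, one per fixation,
    in temporal order.  A 'backtrack' is an A -> B -> A pattern, i.e. the
    gaze returns immediately to the previously fixated model element.
    Returns the backtrack count and a Counter of all AOI transitions.
    """
    transitions = Counter(zip(fixation_aois, fixation_aois[1:]))
    backtracks = sum(
        1 for i in range(len(fixation_aois) - 2)
        if fixation_aois[i] == fixation_aois[i + 2] != fixation_aois[i + 1]
    )
    return backtracks, transitions
```

    Comparing backtrack counts between the novice and expert groups would be one simple way to turn the observed scan-path patterns into a testable measure.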

    What, where, and when? Mechanisms of learning biological motion representations

    The understanding and recognition of human actions is one of the major challenges for technical systems aiming at visual behavior analysis. Evidence from psychophysical and neurophysiological studies provides indications of the feature characteristics and neural processing principles involved in the perception of biological motion sequences. Modeling efforts from the field of computational neuroscience complement these empirical findings by proposing potential functional mechanisms and learning schemes enabling the establishment and recognition of biological motion representations, and show how such principles can be transferred to technical domains. First, results of psychophysical investigations are presented that demonstrate significant increases in human recognition performance for motion (sub-)sequences containing highly articulated poses, which co-occur with local extrema in the motion energy and the extension of the body. Such key poses thus qualify as candidates for establishing biological motion representations. Second, based on these findings, a neural model for the learning of biological motion representations is presented. The model combines hierarchical feedforward and feedback processing along the ventral (form; what) and dorsal (motion; where) pathways with an unsupervised Hebbian learning mechanism for the learning of prototypical form and motion representations. More specifically, gated learning in the form pathway realizes the selective learning of highly articulated postures. Sequence-selective representations are established using temporal association learning driven by motion and form input. The proposed model shows how the unsupervised learning of key poses can form the basis for the establishment of biological motion representations and gives a potential explanation for empirically observed phenomena, such as implied motion perception.
    Third, as a transfer to technical application scenarios, a real-time, biologically inspired action recognition system is presented which automatically selects key poses in action sequences and employs a deep convolutional neural network (DCNN) to learn class-specific pose representations. The network is mapped onto a neuromorphic platform, enabling the real-time (~1000 fps) and energy-efficient (~70 mW) assignment of key poses to action classes. Finally, it is shown how an associative learning scheme similar to the one applied in the neural model for the learning of biological motion representations can be used for the learning of visual category and subcategory representations. Here, instar learning is used to learn representations of visual categories, while outstar learning is applied to establish representations of the expected input distribution. The category-specific pattern is propagated back to the preceding stage, where a residual signal reflecting the difference to the current input signal is derived. This difference is emphasized by modulation of the input with the residual signal and a subsequent normalization. If the difference is large enough, a new subcategory representation is established.
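    The automatic key-pose selection described above rests on motion energy computed from the input stream. A minimal sketch of that selection criterion (not the thesis implementation) could compute the per-frame motion energy from joint displacements and pick local extrema — here, local minima, where the body momentarily pauses in an articulated posture:

```python
import numpy as np

def motion_energy_keyposes(frames):
    """Select key-pose frames at local minima of motion energy.

    frames: (T, N, D) array of N joint positions in D dimensions over
    T time steps.  Motion energy per step is the summed squared
    displacement of all joints between consecutive frames.
    """
    velocity = np.diff(frames, axis=0)             # (T-1, N, D)
    energy = (velocity ** 2).sum(axis=(1, 2))      # one value per step
    # Interior steps strictly below both neighbours.
    return [t for t in range(1, len(energy) - 1)
            if energy[t] < energy[t - 1] and energy[t] < energy[t + 1]]
```

    Frames selected this way could then be cropped and fed to a pose classifier such as the DCNN mentioned in the abstract.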

    Learning representations of animated motion sequences-A neural model

    The detection and categorization of animate motions is a crucial task underlying social interaction and decision-making. Neural representations of perceived animate objects are built in cortical area STS, a region of convergent input from intermediate-level form and motion representations. Populations of STS cells exist which are selectively responsive to specific action sequences, such as walkers. It is still unclear how and to what extent form and motion information contribute to the generation of such representations, and what kind of mechanisms are utilized for the learning processes. The paper develops a cortical model architecture for the unsupervised learning of animated motion sequence representations. We demonstrate how the model automatically selects significant motion patterns as well as meaningful static snapshot categories from continuous video input. Such key poses correspond to articulated postures which are utilized in probing the trained network to impose implied motion perception from static views. We also show how sequence-selective representations are learned in STS by fusing snapshot and motion input, and how learned feedback connections enable making predictions about future input. Network simulations demonstrate the computational capacity of the proposed model.
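    The fusion of snapshot and motion input into sequence-selective representations can be illustrated with a toy temporal association rule: the snapshot unit active at time t strengthens its connection onto the unit active at t+1, so the learned lateral weights encode temporal order and support prediction of future input. This is an illustrative sketch under assumed one-hot activities, not the STS model itself:

```python
import numpy as np

def learn_sequence_weights(snapshots, lr=0.1):
    """Asymmetric Hebbian (temporal association) learning.

    snapshots: (T, K) array of activities of K snapshot neurons over T
    time steps (one-hot in this toy example).  Returns a K x K weight
    matrix W where W[j, i] links unit i at time t to unit j at t+1.
    """
    T, K = snapshots.shape
    W = np.zeros((K, K))
    for t in range(T - 1):
        # Pre-synaptic activity at t paired with post-synaptic at t+1.
        W += lr * np.outer(snapshots[t + 1], snapshots[t])
    return W
```

    After training on a sequence, multiplying W with the current activity vector yields a prediction of the next snapshot, which is the feedback-driven prediction capability the abstract refers to.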